Abstract: The increasing demand for spatial audio in applications such as virtual reality, immersive media, and spatial audio research necessitates robust solutions for generating binaural audio datasets for testing and validation. Binamix is an open-source Python library designed to facilitate programmatic binaural mixing using the extensive SADIE II Database, which provides Head-Related Impulse Response (HRIR) and Binaural Room Impulse Response (BRIR) data for 20 subjects. The library provides a flexible and repeatable framework for creating large-scale spatial audio datasets, making it a valuable resource for codec evaluation, audio quality metric development, and machine learning model training. A range of pre-built example scripts, utility functions, and visualization plots further streamlines the creation of custom pipelines. This paper presents an overview of the library's capabilities, including binaural rendering, impulse response interpolation, and multi-track mixing for various speaker layouts. The library uses a modified Delaunay triangulation technique to achieve accurate HRIR/BRIR interpolation at angles not present in the measured data. By supporting a wide range of parameters, such as azimuth, elevation, subject Impulse Responses (IRs), speaker layouts, and mixing controls, the library enables researchers to create large binaural datasets for any downstream purpose. By offering an open-source solution for binaural rendering and dataset generation, Binamix empowers researchers and developers to advance spatial audio applications with reproducible methodologies. We release the library under the Apache 2.0 License at https://github.com/QxLabIreland/Binamix/
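To illustrate the interpolation idea the abstract refers to, the sketch below performs barycentric interpolation of HRIRs over a plain Delaunay triangulation of measured (azimuth, elevation) points. This is an illustrative sketch, not the Binamix API: the library's *modified* triangulation technique is not reproduced here (in particular, handling of azimuth wraparound at the 0°/360° seam is omitted), and the function name `interpolate_hrir` and the array shapes are hypothetical.

```python
# Sketch: barycentric HRIR interpolation on a Delaunay triangulation
# of measured (azimuth, elevation) points. Not the Binamix implementation.
import numpy as np
from scipy.spatial import Delaunay

def interpolate_hrir(points, hrirs, az, el):
    """points: (N, 2) measured (azimuth, elevation) angles in degrees.
    hrirs: (N, taps, 2) left/right impulse responses per measured point.
    Returns an HRIR at (az, el) as a barycentric blend of the enclosing triangle.
    """
    tri = Delaunay(points)
    simplex = int(tri.find_simplex(np.array([az, el])))
    if simplex == -1:
        raise ValueError("Query angle lies outside the measured grid")
    vertices = tri.simplices[simplex]
    # Barycentric coordinates of the query point within the triangle.
    T = tri.transform[simplex]
    b = T[:2].dot(np.array([az, el]) - T[2])
    weights = np.append(b, 1.0 - b.sum())
    # Weighted sum of the three surrounding measured HRIRs.
    return np.tensordot(weights, hrirs[vertices], axes=1)
```

The barycentric weights sum to one, so the interpolated response is a convex combination of the three nearest measured HRIRs; a production implementation would additionally duplicate points across the azimuth seam so queries near 0°/360° fall inside a valid triangle.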
Abstract: Asthma is a chronic respiratory condition that affects millions of people worldwide. While the condition can be managed by administering controller medications through handheld inhalers, clinical studies have shown low adherence to the correct inhaler usage technique. Consequently, many patients may not receive the full benefit of their medication. Automated classification of inhaler sounds has recently been studied as a means of assessing medication adherence. However, existing classification models have typically been trained on data from specific inhaler types, and their ability to generalize to sounds from different inhalers remains unexplored. In this study, we adapted the wav2vec 2.0 self-supervised learning model for inhaler sound classification by pre-training and fine-tuning it on inhaler sounds. The proposed model achieves a balanced accuracy of 98% on a dataset collected using a dry powder inhaler and a smartwatch. The results also demonstrate that re-fine-tuning this model on minimal data from a target inhaler is a promising approach to adapting a generic inhaler sound classification model to a different inhaler device and audio capture hardware. This is the first study in the field to demonstrate the potential of smartwatches as assistive technologies for personalized monitoring of inhaler adherence using machine learning models.
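As a rough illustration of the transfer-learning recipe described above, the sketch below takes a single re-fine-tuning step of a wav2vec 2.0 classification head using Hugging Face Transformers. It is a sketch under stated assumptions, not the authors' pipeline: the public `facebook/wav2vec2-base` checkpoint stands in for their inhaler-sound pre-trained model, the label set is hypothetical, and random tensors stand in for labelled clips from the target inhaler and capture hardware.

```python
# Sketch: one re-fine-tuning step of a wav2vec 2.0 classifier on a minimal
# batch from a target device. Checkpoint, labels, and data are stand-ins.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForSequenceClassification

labels = ["inhalation", "exhalation", "actuation", "background"]  # hypothetical classes
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2ForSequenceClassification.from_pretrained(
    "facebook/wav2vec2-base", num_labels=len(labels)
)

# A handful of labelled clips from the target inhaler (synthetic stand-ins:
# four 1-second clips at the 16 kHz rate wav2vec 2.0 expects).
waveforms = [torch.randn(16000).numpy() for _ in range(4)]
targets = torch.tensor([0, 1, 2, 3])

inputs = extractor(waveforms, sampling_rate=16000, return_tensors="pt", padding=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
loss = model(**inputs, labels=targets).loss  # cross-entropy over the label set
loss.backward()
optimizer.step()
```

In practice this step would loop over the small target-device dataset for a few epochs, with the bulk of the model's knowledge coming from the prior pre-training and fine-tuning on the source inhaler's sounds.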
Abstract: The development of data-driven heart sound classification models has been an active area of research in recent years. To develop such models in the first place, heart sound signals must be captured using a signal acquisition device. In most situations, however, it is almost impossible to capture noise-free heart sound signals due to internal and external noise sources. Such noises and degradations can reduce the accuracy of data-driven classification models. Although different techniques have been proposed in the literature to address the noise issue, how, and to what extent, different noises and degradations in heart sound signals impact the accuracy of data-driven classification models remains unexplored. To answer this question, we produced a synthetic heart sound dataset of normal and abnormal heart sounds contaminated with a large variety of noises and degradations, and used it to investigate the impact of noise and degradation in heart sound recordings on the performance of different classification models. The results show that different noises and degradations affect the performance of heart sound classification models to different extents: some are more problematic for classification models, while others are less destructive. Comparing these findings with the results of a survey we previously carried out with a group of clinicians shows that the noises and degradations most detrimental to classification models are also the most disruptive to accurate auscultation. These findings can be leveraged to develop targeted heart sound quality enhancement approaches that adapt the type and aggressiveness of enhancement to the characteristics of the noise and degradation present in a heart sound signal.
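A minimal sketch of the kind of contamination step such a synthetic dataset implies (not the authors' generator): mixing a noise recording into a clean heart sound at a chosen signal-to-noise ratio. The function name and the looping of the noise to match the signal length are illustrative choices.

```python
# Sketch: contaminate a clean heart sound with a noise recording at a target SNR.
import numpy as np

def add_noise_at_snr(clean, noise, snr_db):
    """Mix `noise` into `clean` (same sample rate) so the result has the given SNR in dB."""
    noise = np.resize(noise, clean.shape)  # loop/trim the noise to match the signal length
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    # Scale the noise so that 10*log10(p_clean / p_noise_scaled) == snr_db.
    gain = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + gain * noise
```

Sweeping `snr_db` from high to low values yields progressively degraded copies of each recording, giving a controlled axis along which classifier robustness to each noise type can be measured.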